Neural Networks with Recurrent Generative Feedback

Neural Information Processing Systems

Neural networks are vulnerable to input perturbations such as additive noise and adversarial attacks. In contrast, human perception is much more robust to such perturbations. The Bayesian brain hypothesis states that human brains use an internal generative model to update the posterior beliefs of the sensory input. This mechanism can be interpreted as a form of self-consistency between the maximum a posteriori (MAP) estimation of an internal generative model and the external environment. Inspired by this hypothesis, we enforce self-consistency in neural networks by incorporating generative recurrent feedback.
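
The self-consistency loop described in the abstract can be made concrete with a toy example: a feedforward path infers a label posterior, a generative path reconstructs the input from that posterior, and the reconstruction is fed back to refine the inference. The sketch below is a minimal illustration of this iterate-to-self-consistency idea, not the paper's actual architecture; the `FeedbackClassifier` module, its layer sizes, the number of feedback iterations, and the fixed 0.5 mixing weight are all hypothetical choices for demonstration.

```python
# Minimal, hypothetical sketch of recurrent generative feedback.
# This is NOT the authors' model; it only illustrates the loop in which
# bottom-up inference and top-down generation are pushed toward agreement.
import torch
import torch.nn as nn
import torch.nn.functional as F

class FeedbackClassifier(nn.Module):
    def __init__(self, in_dim=784, hidden=256, n_classes=10):
        super().__init__()
        # Feedforward recognition path: input -> class logits.
        self.encoder = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(), nn.Linear(hidden, n_classes)
        )
        # Generative feedback path: class posterior -> reconstructed input.
        self.decoder = nn.Sequential(
            nn.Linear(n_classes, hidden), nn.ReLU(), nn.Linear(hidden, in_dim)
        )

    def forward(self, x, n_iters=3):
        h = x
        for _ in range(n_iters):
            logits = self.encoder(h)               # bottom-up inference
            posterior = F.softmax(logits, dim=-1)  # current belief over labels
            recon = self.decoder(posterior)        # top-down generation
            # Feed a mixture of the observation and the internal reconstruction
            # back to the encoder, nudging the two toward self-consistency.
            h = 0.5 * (x + recon)                  # mixing weight is arbitrary here
        return logits, recon

model = FeedbackClassifier()
x = torch.randn(4, 784)  # toy batch standing in for (possibly noisy) inputs
logits, recon = model(x)
print(logits.shape, recon.shape)  # torch.Size([4, 10]) torch.Size([4, 784])
```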


Review for NeurIPS paper: Neural Networks with Recurrent Generative Feedback

Neural Information Processing Systems

Weaknesses: -- Introduction: Figure 1 does nothing for me. If not, how is the model different from the work that introduced those functions? Are these necessary to make the model work? What happens to model training when you use only one or two of the three losses you describe for training? How does that affect performance?


Review for NeurIPS paper: Neural Networks with Recurrent Generative Feedback

Neural Information Processing Systems

Three knowledgeable referees support acceptance for the contributions, notably for a proposed approach to extend CNNs with generative feedback, while one reviewer supports (marginal) rejection. This paper was extensively discussed post-rebuttal -- especially in light of the fact that the initial evaluation on Brain-Score appeared to have been flawed and that the Brain-Score results changed not just quantitatively but also qualitatively. I also agree with R4 that the overall evaluation is not particularly compelling as a general model of object recognition (see R4's points), as opposed to a narrower approach to building robustness to adversarial attacks. Overall, there appears to be sufficient support to accept this paper because of the novelty of the idea, but all reviewers agreed that the quantitative evaluation on Brain-Score needs to be fixed and the claims revised accordingly.

